8 research outputs found
Fetal Brain Tissue Annotation and Segmentation Challenge Results
In-utero fetal MRI is emerging as an important tool in the diagnosis and
analysis of the developing human brain. Automatic segmentation of the
developing fetal brain is a vital step in the quantitative analysis of prenatal
neurodevelopment both in the research and clinical context. However, manual
segmentation of cerebral structures is time-consuming and prone to error and
inter-observer variability. Therefore, we organized the Fetal Tissue Annotation
(FeTA) Challenge in 2021 in order to encourage the development of automatic
segmentation algorithms on an international level. The challenge utilized the
FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into
seven different tissues (external cerebrospinal fluid, grey matter, white
matter, ventricles, cerebellum, brainstem, deep grey matter). 20 international
teams participated in this challenge, submitting a total of 21 algorithms for
evaluation. In this paper, we provide a detailed analysis of the results from
both a technical and clinical perspective. All participants relied on deep
learning methods, mainly U-Nets, with some variability present in the network
architecture, optimization, and image pre- and post-processing. The majority of
teams used existing medical imaging deep learning frameworks. The main
differences between the submissions were the fine-tuning performed during
training and the specific pre- and post-processing steps. The challenge
results showed that almost all submissions performed similarly. Four of the top
five teams used ensemble learning methods. However, one team's algorithm
performed significantly better than the other submissions and was based on an
asymmetrical U-Net architecture. This paper provides a first-of-its-kind
benchmark for future automatic multi-tissue segmentation algorithms for the
developing human brain in utero.

Comment: Results from FeTA Challenge 2021, held at MICCAI; manuscript submitted.
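The ensemble approach used by four of the top five teams typically averages the per-class softmax outputs of several trained networks before taking a voxel-wise argmax. A minimal NumPy sketch of this fusion step, where the function name and array shapes are illustrative assumptions rather than any team's actual code:

```python
import numpy as np

def ensemble_segmentation(prob_maps):
    """Fuse per-model class-probability maps by averaging, then take argmax.

    prob_maps: list of arrays of shape (C, D, H, W) -- one softmax output per
    model, with C classes (e.g., background plus the seven fetal tissue types).
    Returns an integer label map of shape (D, H, W).
    """
    # Stack along a new "model" axis and average the class probabilities.
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)
    # Assign each voxel the class with the highest mean probability.
    return np.argmax(mean_probs, axis=0)
```

Averaging probabilities (rather than majority-voting hard labels) lets a confident minority model override uncertain peers, which is one reason probability-level ensembling is a common default.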
« Sortir son monstre » ("bringing out one's monster"): the grotesque costume in live performance
Head and neck tumor segmentation in PET/CT: The HECKTOR challenge
This paper relates the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind to focus on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task was the automatic segmentation of the Gross Tumor Volume (GTV) of head and neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, a large improvement over both our proposed baseline method and the inter-observer agreement, which were associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural information in the combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality methods. This promising performance is one step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
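The ranking metric described above, the Dice Score Coefficient averaged across all test cases, can be sketched in a few lines of NumPy. Function names are illustrative, and the convention that two empty masks score 1.0 is an assumption, as the challenge's exact handling of that edge case is not stated here:

```python
import numpy as np

def dice_score(pred, ref):
    """Dice Score Coefficient between two binary masks (1 = tumor voxel)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (assumption)
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def mean_dsc(cases):
    """Average DSC across (prediction, reference) pairs, as used for ranking."""
    return float(np.mean([dice_score(p, r) for p, r in cases]))
```

For example, a prediction overlapping the reference on one of two labeled voxels each yields a DSC of 0.5, and averaging per-case scores (rather than pooling voxels across cases) gives every patient equal weight in the ranking.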
A Novel Cloud-Assisted Secure Deep Feature Classification Framework for Cancer Histopathology Images
Why is the winner the best?
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses based on comprehensive descriptions of the submitted algorithms, linked to their ranks and the underlying participation strategies, revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The "typical" lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly ranked teams: reflecting the evaluation metrics in the method design, and focusing on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art, but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve their algorithm development strategies when approaching new problems, and (2) focus on the open research questions revealed by this work.
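One of the winning strategies identified above, reflecting the evaluation metric in the method design, is commonly realized by training with a differentiable "soft" Dice loss that directly mirrors the Dice-based ranking metric. A minimal NumPy sketch of the idea, assuming a binary segmentation setting; real pipelines would implement this in an autodiff framework such as PyTorch, and the function name and smoothing term are illustrative assumptions:

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Differentiable (soft) Dice loss: 1 minus the soft Dice coefficient.

    probs:  predicted foreground probabilities in [0, 1], any shape
    target: binary ground-truth mask, same shape
    eps:    smoothing term to avoid division by zero on empty masks
    """
    intersection = np.sum(probs * target)
    denom = np.sum(probs) + np.sum(target)
    # Soft Dice uses probabilities directly, so gradients can flow through it.
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

A perfect prediction drives the loss to 0 and a fully disjoint one toward 1, so minimizing this loss optimizes (a smooth surrogate of) the very metric used to rank the submissions.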